Dictionary Method

For the final two days we'll move to measuring the prevalence of themes in a corpus. We'll cover three ways of doing this: the dictionary method, supervised classification, and unsupervised machine learning. Today, the dictionary method.

This is the simplest way to measure the prevalence of a theme in a corpus, and it is used for many purposes, including sentiment analysis. It is one of the longest-standing, and most ubiquitous, methods in automated text analysis, so it's important to both understand the method and be able to implement it.

The method is simple: it involves grouping words into categories or themes, and then counting the number of words from each theme in your corpus. We will use this method to do sentiment analysis, a popular text analysis task, on our Music Reviews corpus, using a standard sentiment analysis dictionary.

Learning Goals

  • Understand the intuition behind the dictionary method
  • Learn how to implement it via Python Pandas and NLTK
  • Get more comfortable combining Python packages for more powerful analyses
    • Today, we'll combine Pandas and NLTK
  • Implement a rudimentary sentiment analysis tool

Outline

  • Introduction to the Dictionary Method
  • Pre-Processing
    • Create a Pandas dataframe
    • Lowercase, remove punctuation, tokenize
    • Create column for token count
  • Sentiment Analysis using the Dictionary Method

Key Jargon

  • dictionary method:
    • a text analysis method that uses the frequency of key words, grouped into themes, to determine the prevalence of those themes throughout a corpus.
  • standard dictionary:
    • also known as a general dictionary; a dictionary created by experts and meant to measure general phenomena.
  • custom dictionary:
    • a dictionary tailored to a specific domain or question, usually created by the researcher based on the research question.
  • sentiment analysis:
    • the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude towards a particular topic, product, etc., is positive, negative, or neutral.
  • lambda function:
    • a small, anonymous function that you write yourself and define inline with the lambda keyword, rather than one of the built-in functions we have been using.

Further Resources

A Novel Method for Detecting Plot, Matt Jockers

Enns, Peter, Nathan Kelly, Jana Morgan, and Christopher Witko. 2015. “Money and the Supply of Political Rhetoric: Understanding the Congressional (Non-)Response to Economic Inequality.” Paper presented at the APSA Annual Meetings, San Francisco.

  • Outlines the process of creating your own dictionary

Neal Caren has a tutorial using MPQA, which implements the dictionary method in Python in a somewhat different way


0. Introduction to the Dictionary Method

The dictionary method is based on the assumption that themes or categories consist of a group of words, and that texts covering a given theme will contain a higher percentage of those words than other texts. Dictionary methods are used for many purposes; a few possibilities are listed below, followed by a minimal sketch of the core counting step.

  • classify text into themes
  • measure the tone of text
  • measure sentiment
  • measure psychological processes
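
To make that intuition concrete, here is a minimal sketch of the counting step, using an invented theme word list and a toy sentence (both are hypothetical, purely for illustration):

In [ ]:
#a toy illustration of the dictionary method: count theme words in a text
#the theme word list and the example text are made up for illustration only
theme_words = ['love', 'happy', 'wonderful', 'joy']

text = "What a wonderful, happy day. I love this record."
tokens = [word.strip('.,').lower() for word in text.split()]

#count how many tokens appear in the theme word list
theme_count = len([word for word in tokens if word in theme_words])

#normalize by total token count so texts of different lengths are comparable
print(theme_count / len(tokens))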

There are two forms of dictionaries: standard or general dictionaries, and custom dictionaries.

Standard Dictionaries

There are a number of standard dictionaries that have been created by field experts. The benefit of standardized dictionaries is that they're developed by experts and have been thoroughly validated. Others have likely published using these dictionaries, so reviewers are more likely to accept them as valid. Because of this, they are good options if they fit your research question.

Here are a few:

  • DICTION: a computer-aided text analysis program for determining the tone of a text. It was created by and for organization scholars and political scientists.
    • Main five categories: Certainty, Activity, Optimism, Realism, Commonality
    • 35 sub-categories
    • Allows you to create your own dictionary
    • Proprietary software
  • Linguistic Inquiry and Word Count (LIWC): Created by psychologists, it's meant to capture psychological processes around feelings, personality, and motivations. It's also proprietary.
  • Multi-Perspective Question Answering (MPQA): A freely available subjectivity and sentiment lexicon, often used as a free alternative to LIWC. We will use this dictionary today.
  • Harvard General Inquirer: Multiple categories, including abstract and concrete words. It's free and available online.

Custom Dictionaries

Many research questions or data are domain specific, however, and will thus require you to create your own dictionary based on your knowledge of the domain and question. Creating your own dictionary requires a lot of thought, and it must be validated. These dictionaries are typically created in an iterative fashion and modified as they are validated. See Enns et al. (2015) for an example of how the authors constructed their own dictionary.
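
As a purely hypothetical illustration (the themes and key words below are invented, not a validated instrument), a custom dictionary often starts as nothing more than a mapping from theme names to word lists, which is then refined over repeated rounds of validation:

In [ ]:
#a toy custom dictionary for music reviews: theme -> list of key words
#these themes and words are invented and would need validation before use
custom_dictionary = {
    'production': ['mix', 'mastering', 'produced', 'engineered', 'studio'],
    'live_performance': ['tour', 'concert', 'stage', 'crowd', 'setlist']
}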

Today we will use the free and standard sentiment dictionary from MPQA to measure positive and negative sentiment in the music reviews.

Our first step, as with any technique, is the pre-processing step, to get the data ready for analysis.

1. Pre-Processing

First, read in our Music Reviews corpus as a Pandas dataframe.


In [ ]:
#import the necessary packages
import pandas
import nltk
from nltk import word_tokenize
import string

#nltk's word_tokenize relies on the punkt tokenizer models;
#if they are not installed yet, uncomment the next line and run it once
#nltk.download('punkt')

#read the Music Reviews corpus into a Pandas dataframe
df = pandas.read_csv("BDHSI2016_music_reviews.csv", sep = '\t')

#view the dataframe
df

The next step is to create a new column in our dataset that contains tokenized words with all the pre-processing steps.

The code here will look slightly different from lesson 1, as we're applying these functions to every row in our dataframe.


In [ ]:
#first create a new column called "body_tokens" and transform to lowercase by applying the string function str.lower()
df['body_tokens'] = df['body'].str.lower()

#make sure it worked
print(df[['body','body_tokens']])

Next we tokenize the text. To do this on a Pandas dataframe we need the apply function. This simply tells the computer to take the function in the parentheses, apply it to each row in the dataframe, and assign the output to a new column.

There are two ways to do this. If it's a built-in function you're applying to the entire field, such as nltk.word_tokenize, you can simply put the function in the parentheses. In some cases, you need to write your own function, called a lambda function. This is the case if you're applying something to a list (Pandas does not deal with list objects well. Hopefully someone smart will fix that). We'll get to that case below.


In [ ]:
#tokenize
df['body_tokens'] = df['body_tokens'].apply(nltk.word_tokenize)

#view output
print(df['body_tokens'])

In [ ]:
punctuations = list(string.punctuation)

#remove punctuation. Let's talk about that lambda x.
df['body_tokens'] = df['body_tokens'].apply(lambda x: [word for word in x if word not in punctuations])

#view output
print(df['body_tokens'])
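
If the lambda syntax looks unfamiliar, here is a sketch of the same step written with a named function instead; the two versions are equivalent (the name remove_punctuation is just an illustrative choice):

In [ ]:
#the lambda above is shorthand for a named helper function like this one
def remove_punctuation(tokens):
    return [word for word in tokens if word not in punctuations]

#passing the named function to apply gives the same result as the lambda version
#df['body_tokens'] = df['body_tokens'].apply(remove_punctuation)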

Pre-processing is done. What other pre-processing steps might we use?
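
Two common candidates are stop word removal and stemming. Here is a sketch using NLTK's stopwords corpus and Porter stemmer; the results go into separate variables so the unstemmed tokens used for the dictionary counts below stay intact:

In [ ]:
#two optional extra steps: stop word removal and stemming
#you may need to run nltk.download('stopwords') once before this works
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

#store the results in new variables so the columns used below are unchanged
tokens_nostop = df['body_tokens'].apply(lambda x: [word for word in x if word not in stop_words])
tokens_stemmed = tokens_nostop.apply(lambda x: [stemmer.stem(word) for word in x])

print(tokens_stemmed.head())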

One more step before getting to the dictionary method. We want a total token count for each row, so we can normalize the dictionary counts. To do this we simply create a new column that contains the length of the token list in each row.


In [ ]:
df['token_count'] = df['body_tokens'].apply(len)

print(df[['body_tokens','token_count']])

2. Creating Dictionary Counts

I created two text files: one is a list of positive words from the MPQA dictionary, the other a list of negative words, with one word per line. Our goal here is to count the number of positive and negative words in each row of our dataframe, and to add two columns to our dataset with those counts.

First, read in the positive and negative words and create list variables for each.


In [ ]:
pos_sent = open("positive_words.txt").read()
neg_sent = open("negative_words.txt").read()

#view part of the pos_sent variable, to see how it's formatted.
print(pos_sent[:101])

In [ ]:
#remember the split function? We'll split on the newline character (\n) to create a list
positive_words=pos_sent.split('\n')
negative_words=neg_sent.split('\n')

#view the first elements in the lists
print(positive_words[:10])
print(negative_words[:10])

In [ ]:
#count number of words in each list
print(len(positive_words))
print(len(negative_words))

Great! Now we can create two more columns that contain the number of positive and negative words in the review tokens. I'm going to get creative with this, as we'll do this step in a single line of code for each of the positive and negative word lists. Your challenges:

  • Can you parse the code? We'll walk through it together.
  • Think of other ways you could do this same thing (one alternative is sketched after the code below).

In [ ]:
#create column with the number of positive words
df['positive_tokens'] = df['body_tokens'].apply(lambda x: len([word for word in x if word in positive_words]))
df['negative_tokens'] = df['body_tokens'].apply(lambda x: len([word for word in x if word in negative_words]))

print(df[['token_count', 'positive_tokens', 'negative_tokens']])
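
One alternative, sketched here: the same counts can be computed after first converting the word lists to sets, since membership tests against a set are much faster than against a list when the corpus is large.

In [ ]:
#an alternative: set membership is much faster than list membership
positive_set = set(positive_words)
negative_set = set(negative_words)

df['positive_tokens'] = df['body_tokens'].apply(lambda x: sum(word in positive_set for word in x))
df['negative_tokens'] = df['body_tokens'].apply(lambda x: sum(word in negative_set for word in x))

print(df[['token_count', 'positive_tokens', 'negative_tokens']])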

That's the dictionary method! You can do this with any dictionary you want, whether a standard one or one you create yourself.

3. Sentiment Analysis using the Dictionary Method

What can we do with this?

First, let's compare the overall sentiment of the reviews by genre.


In [ ]:
#use groupby function
df_genres = df.groupby('genre')

In [ ]:
##EX: Calculate the proportion of words that are positive for each genre.
###Hint: Use the sum() function. Proportion is just the total number of positive words divided by the total number of words.
###How do you calculate this using Pandas?

In [ ]:
##EX: Do the same for negative words. Which genre has the highest proportion of positive and negative words?
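
One possible approach to these exercises, sketched here for reference (there are several equally valid ways to compute group-level proportions in Pandas):

In [ ]:
#sum the counts within each genre, then divide by the total token count per genre
genre_sums = df_genres[['positive_tokens', 'negative_tokens', 'token_count']].sum()

print((genre_sums['positive_tokens'] / genre_sums['token_count']).sort_values(ascending=False))
print((genre_sums['negative_tokens'] / genre_sums['token_count']).sort_values(ascending=False))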

Compare these lists to the average score by genre.


In [ ]:
print(df_genres['score'].mean().sort_values(ascending=False))

Not bad. But this also illustrates potential problems with sentiment analysis, and the dictionary method in general.

Questions:

  • What are the drawbacks of this way of measuring sentiment?
  • How could we improve the measure?
  • How might you create your own dictionary?